Stochastic Variance-Reduced Cubic Regularized Newton Method
Authors
Abstract
We propose a stochastic variance-reduced cubic regularized Newton method for non-convex optimization. At the core of our algorithm is a novel semi-stochastic gradient along with a semi-stochastic Hessian, which are specifically designed for the cubic regularization method. We show that our algorithm is guaranteed to converge to an (ε, √ε)-approximate local minimum within Õ(n^{4/5}/ε^{3/2}) second-order oracle calls, which outperforms state-of-the-art cubic regularization algorithms, including subsampled cubic regularization. Our work also sheds light on the application of variance reduction techniques to high-order non-convex optimization methods. Thorough experiments on various non-convex optimization problems support our theory.
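To make the estimator construction concrete, the following is a minimal Python sketch of one variance-reduced cubic-regularized Newton loop. It is not the paper's exact algorithm: the per-sample oracles grads[i] and hessians[i], the penalty parameter M, the subproblem solver, and all loop sizes are illustrative assumptions, and the paper's actual semi-stochastic estimators may carry additional correction terms.

```python
import numpy as np

def solve_cubic(g, H, M, steps=200, lr=0.01):
    """Approximately minimize the cubic model m(h) = g.h + 0.5 h'Hh + (M/6)||h||^3
    by plain gradient descent (illustrative subproblem solver)."""
    h = np.zeros_like(g)
    for _ in range(steps):
        # gradient of the cubic model: g + H h + (M/2) ||h|| h
        h -= lr * (g + H @ h + 0.5 * M * np.linalg.norm(h) * h)
    return h

def svrc_sketch(grads, hessians, x0, M=10.0, epochs=5, inner=20, batch=8, seed=0):
    """Hedged sketch: SVRG-style semi-stochastic gradient and Hessian feeding a
    cubic-regularized Newton step. grads[i](x) and hessians[i](x) are assumed
    per-sample first- and second-order oracles; batch must satisfy batch <= n."""
    rng = np.random.default_rng(seed)
    n, x = len(grads), x0.copy()
    for _ in range(epochs):
        x_ref = x.copy()                                       # snapshot point
        g_full = np.mean([g(x_ref) for g in grads], axis=0)    # full gradient at snapshot
        H_full = np.mean([h(x_ref) for h in hessians], axis=0) # full Hessian at snapshot
        for _ in range(inner):
            idx = rng.choice(n, size=batch, replace=False)
            # semi-stochastic gradient: minibatch difference anchored at the snapshot
            g = g_full + np.mean([grads[i](x) - grads[i](x_ref) for i in idx], axis=0)
            # semi-stochastic Hessian, built the same way
            H = H_full + np.mean([hessians[i](x) - hessians[i](x_ref) for i in idx], axis=0)
            x = x + solve_cubic(g, H, M)
    return x
```

The snapshot terms g_full and H_full anchor the minibatch differences, so both estimators become more accurate as the iterate approaches the snapshot; this is the variance-reduction effect that saves second-order oracle calls relative to subsampled cubic regularization.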
Similar articles
Sample Complexity of Stochastic Variance-Reduced Cubic Regularization for Nonconvex Optimization
The popular cubic regularization (CR) method converges with first- and second-order optimality guarantees for nonconvex optimization, but encounters a high sample complexity issue when solving large-scale problems. Various sub-sampled variants of CR have been proposed to improve the sample complexity. In this paper, we propose a stochastic variance-reduced cubic-regularized (SVRC) Newton’s method ...
Stochastic Cubic Regularization for Fast Nonconvex Optimization
This paper proposes a stochastic variant of a classic algorithm, the cubic-regularized Newton method [Nesterov and Polyak, 2006]. The proposed algorithm efficiently escapes saddle points and finds approximate local minima for general smooth, nonconvex functions in only Õ(ε^{−3.5}) stochastic gradient and stochastic Hessian-vector product evaluations. The latter can be computed as efficiently as sto...
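As a rough illustration of why Hessian-vector products suffice, the sketch below minimizes the cubic model by gradient descent, forming the product Hh via a finite difference of the gradient. The finite-difference oracle and the step sizes are assumptions of this sketch, not details from the paper; automatic differentiation would serve the same purpose.

```python
import numpy as np

def hvp(grad_fn, x, v, eps=1e-5):
    """Hessian-vector product via central finite differences of the gradient:
    H(x) v ≈ (∇f(x + εv) − ∇f(x − εv)) / (2ε). Costs two gradient calls."""
    return (grad_fn(x + eps * v) - grad_fn(x - eps * v)) / (2 * eps)

def cubic_step_hvp(grad_fn, x, M=10.0, steps=100, lr=0.01):
    """One cubic-regularized Newton step using only gradient and HVP oracles."""
    g = grad_fn(x)
    h = np.zeros_like(x)
    for _ in range(steps):
        # gradient of m(h) = g.h + 0.5 h'Hh + (M/6)||h||^3, with H h from the HVP oracle
        h -= lr * (g + hvp(grad_fn, x, h) + 0.5 * M * np.linalg.norm(h) * h)
    return x + h
```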
Training L1-Regularized Models with Orthant-Wise Passive Descent Algorithms
The ℓ1-regularized sparse model is popular in the machine learning community. The orthant-wise quasi-Newton (OWL-QN) method is a representative fast algorithm for training such models. However, multiple sources have pointed out that its convergence proof is incorrect, and to date its convergence has not been established. In this paper, we propose a stochastic OWL-QN method for...
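For context, the orthant-wise construction at the heart of OWL-QN chooses a coordinate-wise pseudo-gradient for f(x) + λ‖x‖₁ and keeps updates sign-consistent. A hedged NumPy sketch of those two ingredients follows; the function names are illustrative, not from the paper.

```python
import numpy as np

def pseudo_gradient(x, g, lam):
    """Pseudo-gradient of f(x) + lam * ||x||_1, where g is the gradient of the
    smooth part. At x_i = 0 the subgradient closest to zero is chosen (the OWL-QN rule)."""
    pg = np.where(x > 0, g + lam, np.where(x < 0, g - lam, 0.0))
    at_zero = (x == 0)
    pg = np.where(at_zero & (g + lam < 0), g + lam, pg)  # moving right decreases the objective
    pg = np.where(at_zero & (g - lam > 0), g - lam, pg)  # moving left decreases the objective
    return pg

def project_orthant(x_new, orthant):
    """Zero out coordinates that left the chosen orthant (sign-consistency step)."""
    return np.where(np.sign(x_new) == np.sign(orthant), x_new, 0.0)
```

In a full method, the search direction would come from a quasi-Newton model applied to the pseudo-gradient, with each trial point passed through project_orthant before acceptance.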
A Variance Reduced Stochastic Newton Method
Quasi-Newton methods are widely used in practice for convex loss minimization problems. These methods exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic quasi-Newton methods have recently been proposed. However, these typically achieve only sub-linear convergence rates and have no...
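A hedged sketch of the generic recipe such methods build on, an SVRG-style variance-reduced gradient preconditioned by snapshot curvature, is given below. The exact Hessian oracle hess_full, the unit step size, and the loop sizes are assumptions of this sketch; a quasi-Newton approximation would replace the exact solve in practice.

```python
import numpy as np

def vr_newton_sketch(grads, hess_full, x0, epochs=5, inner=20, lr=1.0, seed=0):
    """SVRG gradient estimator combined with a Newton-type step. grads[i](x) is a
    per-sample gradient oracle; hess_full(x) must return an invertible matrix."""
    rng = np.random.default_rng(seed)
    n, x = len(grads), x0.copy()
    for _ in range(epochs):
        x_ref = x.copy()                                  # snapshot point
        mu = np.mean([g(x_ref) for g in grads], axis=0)   # full gradient at snapshot
        H = hess_full(x_ref)                              # curvature at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            v = grads[i](x) - grads[i](x_ref) + mu        # variance-reduced gradient
            x = x - lr * np.linalg.solve(H, v)            # Newton-type step
    return x
```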
Erratum to: A regularized Newton method without line search for unconstrained optimization
For unconstrained optimization, Newton-type methods have good convergence properties and are used in practice. Newton's method combined with a trust-region method (the TR-Newton method), the cubic regularization of Newton's method, and the regularized Newton method with line search are such Newton-type methods. The TR-Newton method and the cubic regularization of N...
Journal: CoRR
Volume: abs/1802.04796
Publication date: 2018